
Neural Port-Hamiltonian Models for Nonlinear Distributed Control: An Unconstrained Parametrization Approach

Zakwan, Muhammad, Ferrari-Trecate, Giancarlo

arXiv.org Artificial Intelligence

The control of large-scale cyber-physical systems requires optimal distributed policies relying solely on limited communication with neighboring agents. However, computing stabilizing controllers for nonlinear systems while optimizing complex costs remains a significant challenge. Neural Networks (NNs), known for their expressivity, can be leveraged to parametrize control policies that yield good performance. However, NNs' sensitivity to small input changes poses a risk of destabilizing the closed-loop system. Many existing approaches enforce constraints on the controllers' parameter space to guarantee closed-loop stability, leading to computationally expensive optimization procedures. To address these problems, we leverage the framework of port-Hamiltonian systems to design continuous-time distributed control policies for nonlinear systems that guarantee closed-loop stability and finite $\mathcal{L}_2$ or incremental $\mathcal{L}_2$ gains, independent of the optimization parameters of the controllers. This eliminates the need to constrain parameters during optimization, allowing the use of standard techniques such as gradient-based methods. Additionally, we discuss discretization schemes that preserve the dissipation properties of these controllers for implementation on embedded systems. The effectiveness of the proposed distributed controllers is demonstrated through consensus control of non-holonomic mobile robots subject to collision avoidance, and through averaged voltage regulation with weighted power sharing in DC microgrids.
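The key idea of an unconstrained parametrization can be illustrated with a minimal sketch: build the port-Hamiltonian structure matrices from *free* parameters so that skew-symmetry, dissipativity, and positivity of the Hamiltonian hold by construction for every parameter value. The quadratic Hamiltonian and the specific matrix constructions below are illustrative assumptions, not the paper's actual parametrization:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# Free parameters: ANY real matrices are admissible, so gradient-based
# training never has to project back onto a constraint set.
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
L = rng.standard_normal((n, n))

# Structure holds by construction, for every parameter draw:
J = A - A.T                      # skew-symmetric interconnection matrix
R = B @ B.T                      # positive semidefinite dissipation matrix
P = L @ L.T + 0.1 * np.eye(n)    # positive definite -> H(x) = x' P x > 0

def grad_H(x):
    return 2.0 * P @ x

def f(x):
    # Port-Hamiltonian vector field: x' = (J - R) dH/dx
    return (J - R) @ grad_H(x)

# Dissipation: dH/dt = dH/dx' (J - R) dH/dx = -dH/dx' R dH/dx <= 0,
# since the skew-symmetric J contributes nothing to the quadratic form.
x = rng.standard_normal(n)
dHdt = grad_H(x) @ f(x)
print(dHdt <= 0.0)   # True for every x and every parameter value
```

Because the stability certificate is built into the construction rather than enforced as a constraint, the three free matrices can be updated with plain gradient descent.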


Automated radiotherapy treatment planning guided by GPT-4Vision

Liu, Sheng, Pastor-Serrano, Oscar, Chen, Yizheng, Gopaulchan, Matthew, Liang, Weixing, Buyyounouski, Mark, Pollom, Erqi, Le, Quynh-Thu, Gensheimer, Michael, Dong, Peng, Yang, Yong, Zou, James, Xing, Lei

arXiv.org Artificial Intelligence

Radiotherapy treatment planning is a time-consuming and potentially subjective process that requires the iterative adjustment of model parameters to balance multiple conflicting objectives. Recent advancements in large foundation models offer promising avenues for addressing the challenges in planning and clinical decision-making. This study introduces GPT-RadPlan, a fully automated treatment planning framework that harnesses prior radiation oncology knowledge encoded in multi-modal large language models, such as GPT-4Vision (GPT-4V) from OpenAI. GPT-RadPlan is given planning protocols as context and acts as an expert human planner, capable of guiding the treatment planning process. Via in-context learning, we incorporate clinical protocols for various disease sites as prompts to enable GPT-4V to acquire treatment planning domain knowledge. The resulting GPT-RadPlan agent is integrated into our in-house inverse treatment planning system through an API. The efficacy of the automated planning system is showcased on multiple prostate and head & neck cancer cases, for which we compared GPT-RadPlan results to clinical plans. In all cases, GPT-RadPlan either outperformed or matched the clinical plans, demonstrating superior target coverage and organ-at-risk sparing. Consistently satisfying the dosimetric objectives in the clinical protocol, GPT-RadPlan represents the first multimodal large language model agent that mimics the behaviors of human planners in radiation oncology clinics, achieving remarkable results in automating the treatment planning process without the need for additional training.
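The in-context setup the abstract describes can be sketched as a chat-style message list that injects a protocol excerpt as domain knowledge alongside a plan image. The protocol text, dose thresholds, and message wording below are made-up placeholders, not the authors' actual prompts:

```python
# Illustrative only: the protocol excerpt and dose values are invented
# placeholders, not the clinical protocols used in the paper.
protocol = (
    "Prostate protocol (illustrative): PTV D95 >= 98% of prescription; "
    "rectum V70Gy < 15%; bladder V65Gy < 25%."
)

def build_messages(protocol_text, dvh_image_b64):
    """Assemble a vision-chat request: protocol as system context,
    the current plan's DVH image as the user turn."""
    return [
        {"role": "system",
         "content": "You are an expert radiotherapy treatment planner. "
                    "Judge plans strictly against the protocol below.\n"
                    + protocol_text},
        {"role": "user",
         "content": [
             {"type": "text",
              "text": "Here is the current plan's DVH. Which objectives "
                      "should be tightened or relaxed next?"},
             {"type": "image_url",
              "image_url": {"url": "data:image/png;base64," + dvh_image_b64}},
         ]},
    ]

messages = build_messages(protocol, "...")   # real image payload elided
```

In a loop, the model's critique would be parsed into objective-weight updates for the inverse planning engine; that feedback step is where the in-house API integration mentioned in the abstract would sit.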


Interaction-Aware Sensitivity Analysis for Aerodynamic Optimization Results using Information Theory

Wollstadt, Patricia, Schmitt, Sebastian

arXiv.org Artificial Intelligence

An important issue during an engineering design process is to develop an understanding of which design parameters have the most influence on the performance. Especially in the context of optimization approaches, this knowledge is crucial in order to realize an efficient design process and achieve high-performing results. Information theory provides powerful tools to investigate these relationships because its measures are model-free and thus also capture non-linear relationships, while requiring only minimal assumptions on the input data. We therefore propose to use recently introduced information-theoretic methods and estimation algorithms to find the most influential input parameters in optimization results. The proposed methods are in particular able to account for interactions between parameters, which are often neglected but may lead to redundant or synergistic contributions of multiple parameters. We demonstrate the application of these methods on optimization data from aerospace engineering, where we first identify the most relevant optimization parameters using a recently introduced information-theoretic feature-selection algorithm that accounts for interactions between parameters. Second, we use the novel partial information decomposition (PID) framework, which quantifies redundant and synergistic contributions of selected parameters with respect to the optimization outcome, to identify parameter interactions. We thus demonstrate the power of novel information-theoretic approaches in identifying relevant parameters in optimization runs and highlight how these methods avoid the selection of redundant parameters, while detecting interactions that result in synergistic contributions of multiple parameters.
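The redundancy-versus-synergy distinction the abstract draws can be made concrete with a plug-in mutual-information estimator on a toy XOR target, where neither parameter is informative alone but the pair is fully informative together. This is a minimal sketch of the underlying information measures, not the paper's feature-selection algorithm or PID estimator:

```python
import numpy as np

def mi_bits(x, y):
    """Plug-in estimate of I(X;Y) in bits from paired discrete samples."""
    x, y = np.asarray(x), np.asarray(y)
    total = 0.0
    for xv in np.unique(x):
        for yv in np.unique(y):
            pxy = np.mean((x == xv) & (y == yv))
            if pxy > 0:
                px, py = np.mean(x == xv), np.mean(y == yv)
                total += pxy * np.log2(pxy / (px * py))
    return total

rng = np.random.default_rng(1)
x1 = rng.integers(0, 2, 10000)   # two independent binary "parameters"
x2 = rng.integers(0, 2, 10000)
y_xor = x1 ^ x2                  # purely synergistic target
joint = 2 * x1 + x2              # encode the pair as a single variable

# Each parameter alone carries ~0 bits; the pair determines y exactly (~1 bit).
print(round(mi_bits(x1, y_xor), 2))     # ~0.0
print(round(mi_bits(joint, y_xor), 2))  # ~1.0
```

A univariate relevance ranking would discard both parameters here, which is exactly the failure mode that interaction-aware selection and PID are designed to avoid.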


Surrogate Models for Optimization of Dynamical Systems

Khowaja, Kainat, Shcherbatyy, Mykhaylo, Härdle, Wolfgang Karl

arXiv.org Machine Learning

Driven by the increased complexity of dynamical systems, the solution of systems of differential equations through numerical simulation in optimization problems has become computationally expensive. This paper provides a smart data-driven mechanism to construct low-dimensional surrogate models. These surrogate models reduce the computational time for the solution of complex optimization problems by using training instances derived from evaluations of the true objective functions. The surrogate models are constructed using a combination of proper orthogonal decomposition and radial basis functions, and provide system responses by simple matrix multiplication. Using relative maximum absolute error as the measure of approximation accuracy, it is shown that surrogate models with Latin hypercube sampling and spline radial basis functions dominate variable-order methods in the computational time of optimization, while preserving accuracy. These surrogate models also show robustness in the presence of model non-linearities. Therefore, these computationally efficient predictive surrogate models are applicable in various fields, specifically to solve inverse problems and optimal control problems, some examples of which are demonstrated in this paper.
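The POD-plus-RBF construction can be sketched in a few lines: collect snapshots of the expensive model over sampled parameters, extract a reduced basis via truncated SVD, interpolate the reduced coefficients over the parameter with radial basis functions, and answer queries by matrix multiplication. The toy model, Gaussian kernel, and truncation rank below are illustrative choices (the paper uses spline RBFs and Latin hypercube sampling):

```python
import numpy as np

# Toy "expensive model": u(x; p) = sin(p * x) on a fixed spatial grid.
x = np.linspace(0, np.pi, 100)
p_train = np.linspace(0.5, 2.0, 12)
snapshots = np.array([np.sin(p * x) for p in p_train]).T   # columns = snapshots

# POD: truncated SVD of the snapshot matrix gives the reduced basis.
U, S, _ = np.linalg.svd(snapshots, full_matrices=False)
r = 6
Phi = U[:, :r]                       # reduced basis, shape (100, r)
A = Phi.T @ snapshots                # reduced coefficients, shape (r, n_train)

# Gaussian RBF interpolation of each reduced coefficient over the parameter.
eps = 2.0
K = np.exp(-(eps * (p_train[:, None] - p_train[None, :]))**2)
W = np.linalg.solve(K, A.T)          # RBF weights, shape (n_train, r)

def surrogate(p):
    """Predict the full field at parameter p via two small mat-vec products."""
    k = np.exp(-(eps * (p - p_train))**2)
    return Phi @ (W.T @ k)

# Query an unseen parameter and compare against the true response.
p_test = 1.3
err = np.max(np.abs(surrogate(p_test) - np.sin(p_test * x)))
print(err)   # small relative to max |u| = 1
```

Once the offline SVD and kernel solve are done, each online evaluation costs only the two matrix-vector products in `surrogate`, which is what makes such models attractive inside optimization loops.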


Neural network augmented inverse problems for PDEs

Berg, Jens, Nyström, Kaj

arXiv.org Machine Learning

In this paper we study the classical coefficient approximation problem for partial differential equations (PDEs). The inverse problem consists of determining the coefficient(s) of a PDE given more or less noisy measurements of its solution. A typical example is the heat distribution in a material with unknown thermal conductivity. Given measurements of the temperature at certain locations, we are to estimate the thermal conductivity of the material by solving the inverse problem for the stationary heat equation. The problem is of both practical and theoretical interest. From a practical point of view, the governing equation for some physical process is often known, but the material, electrical, or other properties are not. From a theoretical point of view, the inverse problem is a challenging, often ill-posed problem in the sense of Hadamard. Inverse problems have been studied for a long time, starting with Levenberg [24] in the 1940s and Marquardt [29] and Kac [20] in the 1960s, before being popularized by Tikhonov [36] in the 1970s through his work on regularization techniques.
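The thermal-conductivity example can be made concrete in one dimension, where the stationary heat equation with a constant unknown conductivity admits a closed-form solution and the inverse problem reduces to least squares. This is a minimal illustration of the problem setup, not the neural-network-augmented method the paper develops:

```python
import numpy as np

# Forward model: stationary heat equation -k u'' = 1 on (0, 1), u(0) = u(1) = 0,
# with constant unknown conductivity k.  Its solution is u(x) = x(1 - x) / (2k).
x = np.linspace(0, 1, 51)
c = x * (1 - x) / 2.0              # so that u = c / k

k_true = 2.5
rng = np.random.default_rng(0)
u_meas = c / k_true + 1e-3 * rng.standard_normal(x.size)   # noisy measurements

# Inverse problem: minimize ||u_meas - c/k||^2 over k.  Substituting
# b = 1/k makes the fit linear, with the closed-form least-squares solution:
b_hat = (c @ u_meas) / (c @ c)
k_hat = 1.0 / b_hat
print(k_hat)   # close to k_true = 2.5
```

With spatially varying coefficients this linear structure disappears and the problem becomes ill-posed in Hadamard's sense, which is where regularization (and, in this paper, neural network parametrizations) enters.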